Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance.

Summary: The authors reinterpret regularization in optimization problems as a constraint of the form "the parameters ${\bf w}$ must belong to the convex set $O$", where $O$ is obtained as the convex hull of all points of the form $g.v$, with $v$ some fixed vector, $g$ an element of a group, and $.$ a (linear) action of the group element $g$ on the vector $v$. More concretely, their main contributions are as follows. (A) they show that familiar regularizers arise this way; for example, the ball of the L1 norm is the convex hull of the points obtained by flipping the signs and permuting the components of the vector $(1,0,0,\dots,0)$; (B) they show that, given a seed $v$ and an action of a group $G$, the statement "$w$ is a member of the convex set $O_G(v)$" can be read as "$v$ is smaller than $w$" under a pre-order; (C) they show that if $-v$ belongs to the convex set $O$, then $O$ can be seen as the ball of an atomic norm (as defined in Chandra et al.); (D) they show that the sorted L1 norm equals the dual of the norm associated with the signed-permutation orbitope; (E) they show how to reinterpret the main steps of the conditional and projected gradient algorithms in the language of orbitopes, and give a procedure for computing projections onto orbitopes.

Quality: There are no technical mistakes in the paper.
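The L1 example in contribution (A) can be checked numerically: any $w$ with $\|w\|_1 \le 1$ is a convex combination of the atoms $\pm e_i$, i.e. the signed-permutation orbit of $(1,0,\dots,0)$. Below is a minimal sketch of that decomposition; the function names are illustrative and not from the paper.

```python
import numpy as np

def l1_ball_atoms(dim):
    """Atoms of the L1 ball: the signed-permutation orbit of e1 = (1, 0, ..., 0),
    i.e. the 2*dim vectors +e_i and -e_i."""
    eye = np.eye(dim)
    return np.vstack([eye, -eye])

def convex_decomposition(w):
    """Write w (with ||w||_1 <= 1) as a convex combination of the atoms ±e_i.
    Returns weights lam with lam >= 0 and sum(lam) == 1 over l1_ball_atoms."""
    dim = len(w)
    lam = np.zeros(2 * dim)
    for i, wi in enumerate(w):
        if wi >= 0:
            lam[i] = wi          # mass on +e_i
        else:
            lam[dim + i] = -wi   # mass on -e_i
    slack = 1.0 - np.abs(w).sum()  # leftover mass if w is in the interior
    lam[0] += slack / 2            # split the slack between +e_1 and -e_1,
    lam[dim] += slack / 2          # which cancel out and leave w unchanged
    return lam

w = np.array([0.3, -0.2, 0.1])
lam = convex_decomposition(w)
atoms = l1_ball_atoms(3)
assert np.allclose(lam @ atoms, w)                      # reconstructs w
assert np.isclose(lam.sum(), 1.0) and (lam >= 0).all()  # valid convex weights
```

This is exactly the sense in which the L1 ball is an orbitope: membership in the ball is certified by exhibiting convex weights over the group orbit of the seed vector.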


Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions

Conti, Paolo, Gobat, Giorgio, Fresca, Stefania, Manzoni, Andrea, Frangi, Attilio

arXiv.org Artificial Intelligence

However, solving parametrized, time-dependent systems of partial differential equations (PDEs) by means of full order models (FOMs) - such as the finite element method - may clash with time and computational budget restrictions. Moreover, using FOMs to explore different scenarios with varying initial conditions and parameter combinations may be computationally prohibitive, or even infeasible, in several practical applications. Unlike the problem of estimating output quantities of interest that depend on the solution of the differential problem, computing the whole solution field is intrinsically high-dimensional, with additional difficulties related to the nonlinear and time-dependent nature of the problem. All these reasons drive the search for efficient, yet accurate, reduced order models (ROMs). Among these, the reduced basis method [56, 33, 4] is a very well-known approach, exploiting, e.g., proper orthogonal decomposition (POD) to build a reduced space, either global or local [1, 55], to approximate the solution of the problem. However, despite their accuracy and sound mathematical foundations, these techniques are in general intrusive [25]. Among the machine and deep learning techniques widely used to build surrogate models or emulators for the solution of parametrized, nonlinear, time-dependent systems of PDEs, autoencoder (AE) neural networks [27] have recently become a popular strategy, because they make it possible to reduce dimensionality non-intrusively and unveil latent features directly from data streams, without accessing the FOM operators [26, 43, 48, 37]. Their success is due to the expressive capacity of neural networks [11, 44, 34], which enables outstanding performance in nonlinear compression and great flexibility in identifying coordinate transformations [46].
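The POD step mentioned above has a very compact linear-algebra core: the reduced basis is given by the leading left singular vectors of a snapshot matrix. A minimal sketch, on synthetic data (all names and sizes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_dofs, n_snapshots, r = 200, 40, 5
# Synthetic snapshot matrix whose columns live (almost) in an r-dimensional
# subspace, mimicking FOM solutions sampled over time/parameters.
basis_true = rng.standard_normal((n_dofs, r))
snapshots = basis_true @ rng.standard_normal((r, n_snapshots))
snapshots += 1e-6 * rng.standard_normal((n_dofs, n_snapshots))  # small noise

# POD basis: leading r left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]

# Project a snapshot onto the reduced space and measure reconstruction error.
u = snapshots[:, 0]
u_rom = V @ (V.T @ u)   # reduced-order reconstruction of the full state
rel_err = np.linalg.norm(u - u_rom) / np.linalg.norm(u)
assert rel_err < 1e-3   # near-exact here, since the data is (almost) rank r
```

Autoencoders generalize this picture by replacing the linear map $V^T$ (encoder) and $V$ (decoder) with nonlinear neural networks, which is what allows the much stronger compression referred to in the text.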